Finite-Element Mesh Generation Using Self-Organizing Neural Networks
Authors: L. Manevitz, M. Yousef & D. Givoli
Abstract
Neural networks are applied to the problem of mesh placement for the finite-element method. When the finite-element method is used to numerically solve a partial differential equation with boundary conditions over a domain, the domain must be divided into "elements." The precise placement of the nodes of the elements has a major effect on the accuracy of the numeric method. In this paper the self-organizing algorithm of Kohonen is adapted to solve the problem of automatically assigning (in a near-optimal way) coordinates from a two-dimensional domain to a given topological grid (or mesh) of nodes in order to apply the finite-element method effectively when solving a partial differential equation with boundary conditions over that domain. One novelty of the method is the interweaving of versions of the Kohonen algorithm in different dimensions simultaneously in order to handle the boundary of the domain properly. Our method allows for the use of arbitrary types of two-dimensional elements (in particular, quadrilaterals or mixed shapes as opposed to just triangles) and for varying desired densities over the domain. (Thus more elements can be placed automatically near "areas of interest.") The methods and experiments developed here are for two-dimensional domains but seem naturally extendible to higher-dimensional problems. The method uses a mixture of both one- and two-dimensional versions of the Kohonen algorithm, with an improvement suggested by Tabakman and Exman, and further adapted to the particular problem here. Experimental results comparing this algorithm with a well-known two-dimensional grid-generating system (PLTMG) are presented.

1 BACKGROUND OF THE APPLICATION

The finite-element method (FEM) is a computationally intensive method for solving partial differential equations. Effective use of the method requires setting up the computational framework in an appropriate manner, which typically requires expertise.
In more detail, when applying the FEM to a given domain, one has to divide the domain into a finite number of nonoverlapping subdomains (elements). (In two dimensions, the elements are usually triangles or quadrilaterals.) One also has to define a finite number of nodes, which are the vertices of the elements, and possibly other points as well. The collection of elements and nodes (and the connections among them) constitutes the finite-element mesh, whose quality is an essential ingredient in achieving accurate and reliable numeric results for all finite-element codes. The computational cost of generating the mesh may be much lower than, comparable with, or in some cases higher than the cost associated with the numeric solver of the partial differential equations, depending on the application and the specific numeric scheme at hand. See, for example, ref. 2 for details.

[© 1997 Microcomputers in Civil Engineering. Published by Blackwell Publishers, 350 Main Street, Malden, MA 02148, USA, and 108 Cowley Road, Oxford OX4 1JF, UK.]

To achieve a high-quality mesh, one has to (1) decide on the appropriate size and topology of the mesh, (2) decide how it should be placed on the domain, and (3) afterwards make decisions regarding the organization of the data on the mesh that have an effect on the ease of computation. Each of these areas requires expertise. In this paper we apply the methods of neural networks, in particular, self-organizing neural networks, to the automation of the second of these problems, i.e., given the number of nodes and the mesh's topology, deciding how to place the mesh on the domain in such a way as to optimize the productivity of the finite-element method. (For work concerning the third point, i.e., efficient numbering of the nodes, see ref. 5.) The density of the mesh affects the accuracy of the finite-element results.
A finer mesh would give more accurate solutions but also would necessitate a larger computational effort. Thus the actual density of the mesh used in a certain computation is a compromise between accuracy and cost. The main parameter that controls the density of the mesh is called the mesh parameter; this is roughly the size of the largest element in the mesh. Of course, the density of the mesh should not necessarily be uniform. The mesh may be finer in some regions and coarser in others. The problem of generating and placing a mesh, say, in two dimensions, is not merely a problem of dividing a given area into nonoverlapping triangles and/or quadrilaterals of a given maximum size. This is so because finite-element meshes must have certain properties in order to be acceptable for computation. The following guidelines are considered standard. In stating them, we refer to the two-dimensional case for simplicity.

1. The mesh should be finer in regions where the solution is believed to be changing rapidly or to have large gradients. Thus smaller elements should be used near singularity points such as reentrant corners or cracks, near holes, near small features of the boundary, near the location of rapidly changing boundary data, at and near inhomogeneities, etc.

2. All elements should be well proportioned. The aspect ratio of the element (namely, the ratio between its largest and smallest dimensions) should be close to unity. Square elements are the best quadrilaterals, but even an aspect ratio of 1.5 or 2 is acceptable.

3. All interior angles of the element must be significantly smaller than 180 degrees. For example, a quadrilateral with three of its vertices lying on a nearly straight line is usually unacceptable.

4. Transition from large elements to small elements must be made gradually. The ratio between the sizes of two neighboring elements may be 1.5 or 2 but should not be much greater than this.
In the early days of the FEM (in the sixties and early seventies), finite-element meshes were produced manually. This was a tedious task that also easily admitted errors in the data description. As the method was applied to successively larger problems, the time for mesh preparation also became prohibitive. These difficulties have been alleviated by the development of automatic mesh-generation algorithms. Currently, there are several schemes in use for automatic mesh generation. One major class of schemes is based on conformal mapping. See, for example, refs. 8, 11, and 14. Here, a regular mesh in a simple domain (e.g., a rectangle) is mapped into the actual domain under consideration using numeric conformal mapping. This procedure can produce high-quality meshes but is sometimes expensive. The high computational cost of this method is due to the fact that it requires the solution of Laplace's equation (or some equivalent computational effort) to generate the mesh prior to the solution of the partial differential equation under consideration. Other two-dimensional schemes are triangulation schemes, which produce, by some construction, meshes made of triangles. See, for example, refs. 1 and 12 through 15. Although it is well known that quadrilaterals usually perform better than triangular elements of the same order,7 it is much easier in general to generate an acceptable triangulation than an acceptable mesh composed of quadrilaterals. Three-dimensional mesh generation is much more complicated still. Because of this, for the three-dimensional case, tetrahedral schemes completely dominate, although it is known that they are not always a good choice of element in terms of accuracy. A review and discussion of various mesh-generation methods can be found in two recent books.4,9

2 METHODS OF THIS PAPER

Our approach has been to split this rather complicated global optimization problem into several parts.
On the one hand, there are the decisions regarding the size of the mesh, the kinds of elements, and the appropriate densities in different regions of the domain; on the other hand, one has to realize the mesh by specific assignments of geometric coordinates to the nodes. We anticipate the first part being accomplished by an expert system(s) that will, based on geometric and physical considerations, decide on the regions of interest and the desired density distribution of the elements; the second part (which is the work presented here) is realized by self-organizing neural networks.10 A self-organizing neural network, as described, for example, by Kohonen,10 is a system of neurons linked by a topology. Such a network can then learn to adjust its weight parameters, based on the input, in such a way as to automatically create a map of responsive neurons that topologically resembles the input data. The map so generated should in principle automatically favor most of the heuristic rules stated above, and so the map can be taken as the placement of the mesh. The methods presented here are independent of the specific topology chosen for the mesh. While the meshes we have used in our experiments have been chosen to consist of quadrilaterals, in principle, our methods will work for any mesh, even mixed triangular and quadrilateral ones. (As stated earlier, while it is known that quadrilateral meshes generally give better results than triangular ones,7 the accepted methods of placing the mesh are more developed in the case of triangles.14) To evaluate our methods, we compared our results with a fully developed automatic mesh generator (for triangles), PLTMG,1 by comparing the results in solving a series of boundary value problems over a two-dimensional domain.
The appropriate placement of the mesh has as a heuristic component the choice of the density function, which expresses which parts of the domain should be approximated more closely than others (typically the density is higher near corners or other interesting geometric phenomena where the linear approximation inside an element is intrinsically worse). Note that the best density choice may actually depend on the solution to the differential equation. Nonetheless, in many cases qualitative information is available prior to the solution of the equation (e.g., from the boundary conditions), so it is not unreasonable for a system to generate an appropriate density function based on the statement of the partial differential equation problem and boundary conditions. For example, it is expected that the solution exhibits high gradients near a sharp corner in the boundary. In such cases, a density function may be chosen to exploit this information. If nothing is known a priori about the solution, a density function may still be chosen in an adaptive a posteriori fashion. For example, the problem may be solved preliminarily with a uniform-density mesh (perhaps with relatively few nodes), and then a density function may be constructed based on the gradients of this solution before solving the problem again. (In our examples we chose the density function by hand, with the knowledge of the available exact solutions.) Mesh quality can be judged by eye in the two-dimensional case. However, we also found it necessary to define an analytic measure of the quality. Below we describe this measure of mesh quality; essentially it is a mathematical realization of the preceding heuristic rules on the way a mesh should vary to obtain good numeric results. Currently, we use visual quality to decide when to cease improving the mesh, but this can be replaced by examining the changes in the mesh quality function. An analogue of this quality measure can be developed for higher-dimensional meshes.
As stated, the essence of our implementation is the Kohonen self-organizing neural network algorithm.10 This algorithm allows a network to choose its weights so as to fix its topological elements in weight space in such a way as to mimic as closely as possible the arrangement of sample input data. In other words, the neural network becomes a representative map of the sample data information. We exploit this, in order to arrange the placement of the finite-element mesh, by identifying the mesh nodes with neural nodes and identifying the weight space with the physical space of the domain, thereby causing the network to be an approximation of the density function. This happens by randomly choosing sample points of the domain as input to the self-organizing neural network in direct correspondence with the density function (see below for further details). Thus the "coordinization" of the mesh is carried out automatically by the Kohonen algorithm, with the only input necessary being sample points of the domain chosen randomly to reflect the desired density function. The mesh then "self-organizes" to make the best possible representation of the domain by the mesh elements. It turns out that such a straightforward procedure has some difficulties in our context, which required some sophistication and modification when adapting the algorithm. That is,

1. A finite-element mesh has to fit exactly inside the domain and reach the boundaries.
2. Computational requirements are somewhat high.
3. Nonconvex domains require special techniques.

Our implementation dealt with these problems with the following techniques:

1. The algorithm actually uses an interweaving of several Kohonen algorithms. There is an interweaving of a two-dimensional Kohonen algorithm on the mesh with a one-dimensional Kohonen algorithm on each connected component of the boundary.
More will be stated in the sequel, but note that this suggests the correct generalization to higher dimensions.

2. We used an adaptation of the Kohonen algorithm suggested by Tabakman and Exman16 that resulted in a speed increase of around 75% without degradation of performance.

3. A special modification to the algorithm was made to handle nonconvex domains properly. This algorithm is not as advanced as the convex one, but results are already comparable with the PLTMG standard.

3 THE NEURAL NETWORK ALGORITHMS

3.1 The basic Kohonen algorithm and the Tabakman-Exman speedup

Complete details of the Kohonen algorithm can be found in ref. 10 or ref. 6. As stated earlier, the Kohonen algorithm is designed to allow a given set of neurons organized with a topology to "self-organize" itself in such a way as to make (1) each neuron equiprobabilistically likely to respond to an impulse in the data set and (2) the topology preserved in the sense that nearby neurons will respond to nearby impulses. In our context, this algorithm works as follows:

1. A particular topology of nodes (or neurons) is chosen. Each node is associated with a location in a space (e.g., by cartesian coordinates). In neural network terminology, the weight space of the neuron is associated with the cartesian coordinates.

2. Input is a sequence of points in the space, chosen randomly according to the desired distribution.

3. For each input, a node in the topology is selected by a form of "winner-takes-all" competition [for example, (roughly speaking) the node closest to the input in the current coordinates of the nodes].

4. The location of the chosen node is adjusted, as are the coordinates of all nodes within a certain neighborhood of this chosen node. The adjustment is a movement of the chosen node toward the input point. The amount of the movement is determined by a parameter α that decreases dynamically as the algorithm proceeds in time (see Figures 1 and 2).
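The four steps above can be sketched in a few lines. The following is a minimal illustration, not the authors' code; the array layout, the grid (Chebyshev) distance used for the topological neighborhood, and the fixed `radius` parameter are assumptions made for concreteness:

```python
import numpy as np

def kohonen_step(weights, grid_idx, x, alpha, radius):
    """One iteration of the self-organizing update (sketch).

    weights  -- (n, 2) array: current physical coordinates of the nodes
    grid_idx -- (n, 2) array: fixed topological (row, col) index of each node
    x        -- (2,) input point, sampled according to the density function
    alpha    -- learning rate, decreased as the algorithm proceeds
    radius   -- topological neighborhood radius, also decreased over time
    """
    # Winner-takes-all: the node whose current coordinates are closest
    # to the input point.
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Topological (grid) distance from every node to the winner.
    d = np.abs(grid_idx - grid_idx[winner]).max(axis=1)
    # Move the winner and its topological neighbors toward the input.
    mask = d <= radius
    weights[mask] += alpha * (x - weights[mask])
    return winner
```

In a full run this step would be repeated over a stream of sampled points while `alpha` and `radius` are gradually decreased, which is what lets the map first unfold globally and then fine-tune locally.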
If the input points form a faithful representation of the domain, then the nodes eventually form a map approximating the given domain. Note, however, that because of the probabilistic nature of this scheme, meshes generated for symmetric regions will not be perfectly symmetric. This general algorithm is independent of the dimensionality. What is needed is a representation of the nodes in a space and an appropriate generator of input points. Tabakman and Exman16 suggested an adaptation of this basic algorithm that speeds things up substantially. Essentially (see refs. 16 and 17 for details), the network is broken into subregions, and each subregion is represented by one member, roughly from the center of the subregion. The winner-takes-all competition then proceeds first by competition only among the representatives and then among the members of the chosen subregion (see Figure 3). We found that the use of one level of this adaptation resulted in a 75% improvement in the speed of the Kohonen algorithm without any change in the quality of results. (In principle, this algorithm can work with several levels of representation, but we did not need this.)

3.2 Adaptation to our problem

3.2.1 Density functions

In principle, then, one needs only the choice of a network and a density function. For example, a uniform density function is easily generated by first enclosing the body inside a circumscribing rectangle, choosing x and y coordinates with one of the standard random number generators (we used random from the standard C library package), and then rejecting any points that fall outside the body itself. One then simply uses the randomly chosen points in the region as input to the Kohonen algorithm network. In the same way, any computable density function on the domain can be input to the Kohonen network.
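The rejection procedure just described can be sketched as follows. The `inside` predicate and the numpy generator (standing in here for the C library's random) are illustrative assumptions; any membership test and uniform generator would serve:

```python
import numpy as np

def sample_domain(inside, bbox, rng):
    """Draw one point uniformly from a 2-D domain by rejection (sketch).

    inside -- predicate deciding whether a point lies in the domain
    bbox   -- (xmin, xmax, ymin, ymax) of a circumscribing rectangle
    rng    -- a numpy random Generator
    """
    xmin, xmax, ymin, ymax = bbox
    while True:
        # Choose a candidate uniformly in the circumscribing rectangle...
        p = np.array([rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)])
        # ...and reject it if it falls outside the body itself.
        if inside(p):
            return p
```

A nonuniform density can then be obtained by post-processing or re-weighting the accepted samples, so the same generator feeds the Kohonen network in every case.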
We implemented nonuniform densities by taking a composition of squares of uniform ones centered at various "hot spots." In our system, a user indicates areas of interest with a pointing device (a graphic mouse), and then the appropriate density functions are generated automatically as the square of the uniform density function on [0, 1] with (0, 0) identified with the hot spot (see Figure 4). We point out that any other method of generating the density function is acceptable. Note that often in solving partial differential equations with boundary conditions, if the submitted data (e.g., boundary conditions, geometry, load f, etc.) exhibit some nonsmooth behavior, as in the case of cracks, corners, or concentrated loads, this gives a hint as to what the density function should look like. In an elliptic partial differential equation, these are the only cases where a nonuniform density is needed, and so we have a good idea in advance of the solution where the hot spots are. On the other hand, for hyperbolic partial differential equations, the solution may be nonsmooth even if all the data are smooth, as in the appearance of shocks in fluid-flow problems. In such cases, it is more complicated to predict in advance where the hot spots are, but one can resort to adaptive methods (e.g., solving first with a sparser uniform mesh and then re-solving with a new mesh related to the less accurate solution).

3.2.2 Measures of mesh quality

To test the current version of the algorithm, we used two measures of success.
First, we used the following as our measure of the quality of a mesh, reflecting the heuristic rules given in Section 1. Here, for a given element e, a_e refers to the largest side of the quadrilateral and b_e to the smallest:

E_1^e = 1 - b_e / a_e,

giving a measure of the aspect ratio;

E_2^e = max_{i=1,...,4} | 1 - (2/π) angle_i |,

measuring how close all the quadrilateral angles are to 90 degrees; and

E_3^e = max_{neighbors n} | 1 - min{ a_e / a_n, a_n / a_e } |,

measuring how similar an element is to its neighboring ones. Then, allowing different positive weights w_i to the different measures, we have
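The three measures can be evaluated per element along the following lines. This is a sketch under stated assumptions: the paper leaves the weights w_i open, so the values in `w` are hypothetical, and a weighted sum of the three measures is assumed as the combined penalty:

```python
import math

def element_quality(sides, angles, neighbor_sizes, w=(1.0, 1.0, 1.0)):
    """Quality penalty of one quadrilateral element (sketch).

    sides          -- lengths of the four sides of the element
    angles         -- the four interior angles, in radians
    neighbor_sizes -- largest-side lengths a_n of neighboring elements
    w              -- positive weights for the three measures (hypothetical)
    Returns a nonnegative penalty; 0 corresponds to a perfect square
    element surrounded by same-size neighbors.
    """
    a_e, b_e = max(sides), min(sides)
    # E_1: deviation of the aspect ratio from unity.
    e1 = 1.0 - b_e / a_e
    # E_2: worst deviation of an interior angle from 90 degrees (pi/2).
    e2 = max(abs(1.0 - (2.0 / math.pi) * t) for t in angles)
    # E_3: worst size mismatch against a neighboring element.
    e3 = max(abs(1.0 - min(a_e / a_n, a_n / a_e))
             for a_n in neighbor_sizes) if neighbor_sizes else 0.0
    return w[0] * e1 + w[1] * e2 + w[2] * e3
```

Summing these penalties over all elements gives one scalar to monitor, which is how the visual stopping criterion mentioned above could be replaced by watching the change in the quality function.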